5 research outputs found
Leveraging Haptic Feedback to Improve Data Quality and Quantity for Deep Imitation Learning Models
Learning from demonstration (LfD) is a proven technique to teach robots new
skills. Data quality and quantity play a critical role in LfD trained model
performance. In this paper we analyze the effect of enhancing an existing
teleoperation data collection system with real-time haptic feedback; we observe
improvements in the collected data throughput and its quality for model
training. Our experiment testbed was a mobile manipulator robot that opened
doors with latch handles. Evaluation of teleoperated data collection on eight
real-world conference room doors found that adding haptic feedback improved
the data throughput by 6%. We additionally used the collected data to train six
image-based deep imitation learning models, three with haptic feedback and
three without it. These models were used to implement autonomous door-opening
with the same type of robot used during data collection. Our results show that
a policy from a behavior cloning model trained with haptic data performed on
average 11% better than its counterpart trained without haptic feedback data,
indicating that haptic feedback resulted in the collection of a higher-quality
dataset.
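The core technique named above, behavior cloning, treats the teleoperated demonstrations as a supervised dataset of (observation, action) pairs. The following is a minimal illustrative sketch of that idea; the paper's models are image-based deep networks, whereas this toy uses synthetic low-dimensional features and a linear policy fit by least squares. All variable names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": observations and the operator's actions.
# In the paper these would come from haptic-assisted teleoperation.
obs = rng.normal(size=(200, 4))           # e.g. simple pose features
true_W = rng.normal(size=(4, 2))          # unknown expert mapping (for the toy)
actions = obs @ true_W + 0.01 * rng.normal(size=(200, 2))

# Behavior cloning: supervised regression from observations to actions.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The cloned policy imitates the demonstrator on new observations.
new_obs = rng.normal(size=(5, 4))
predicted_actions = new_obs @ W
print(predicted_actions.shape)  # (5, 2)
```

The intuition behind the paper's result follows from this framing: cleaner, more consistent demonstrations (here, lower action noise) yield a policy closer to the expert's, which is why higher-quality haptic-assisted data improves the trained model.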
Gesture2Path: Imitation Learning for Gesture-aware Navigation
As robots increasingly enter human-centered environments, they must not only
be able to navigate safely around humans, but also adhere to complex social
norms. Humans often rely on non-verbal communication through gestures and
facial expressions when navigating around other people, especially in densely
occupied spaces. Consequently, robots also need to be able to interpret
gestures as part of solving social navigation tasks. To this end, we present
Gesture2Path, a novel social navigation approach that combines image-based
imitation learning with model-predictive control. Gestures are interpreted
by a neural network that operates on streams of images, while we use a
state-of-the-art model predictive control algorithm to solve point-to-point
navigation tasks. We deploy our method on real robots and showcase the
effectiveness of our approach on four gesture-navigation scenarios: left,
right, follow me, and make a circle. Our experiments indicate that our
method is able to successfully interpret complex human gestures and to use them
as a signal to generate socially compliant trajectories for navigation tasks.
We validated our method with in-situ ratings from participants interacting
with the robots.
Comment: 8 pages, 12 figures
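The combination described above can be pictured as a gesture classification feeding into the cost function of a model-predictive controller. The sketch below, with entirely hypothetical names and costs, shows the idea on a 2D point robot using simple sampling-based MPC: the recognized gesture (e.g. "left") adds a penalty that reshapes which trajectories the point-to-point planner prefers. This is not the paper's implementation, only an illustration of gesture-conditioned planning.

```python
import numpy as np

rng = np.random.default_rng(1)

def gesture_cost(traj, gesture):
    """Extra cost a recognized gesture adds to a candidate trajectory."""
    if gesture == "left":    # penalize positive x: steer to the left half-plane
        return np.maximum(traj[:, 0], 0.0).sum()
    if gesture == "right":   # penalize negative x
        return np.maximum(-traj[:, 0], 0.0).sum()
    return 0.0

def mpc_step(state, goal, gesture, horizon=10, samples=256):
    """Sampling-based MPC: return the first action of the cheapest rollout."""
    best_u, best_c = None, np.inf
    for _ in range(samples):
        u = rng.uniform(-0.2, 0.2, size=(horizon, 2))  # candidate velocities
        traj = state + np.cumsum(u, axis=0)            # simple integrator model
        cost = (np.linalg.norm(traj - goal, axis=1).sum()
                + 5.0 * gesture_cost(traj, gesture))
        if cost < best_c:
            best_u, best_c = u[0], cost
    return best_u

# Drive toward a goal while respecting a "left" gesture.
state, goal = np.array([0.0, 0.0]), np.array([0.0, 2.0])
for _ in range(30):
    state = state + mpc_step(state, goal, gesture="left")
```

In the paper's setting, the gesture term would come from the image-based imitation-learned model rather than a hand-written penalty, but the planner structure, re-optimizing a short horizon at every step under a gesture-dependent cost, is the same pattern.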
Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems
As robotic systems move out of factory work cells and into human-facing
environments, questions of choreography become central to their design,
placement, and application. With a human viewer or counterpart present, a
system will automatically be interpreted by human beings, through its context,
style of movement, and form factor, as an animate element of their environment. The
interpretation by this human counterpart is critical to the success of the
system's integration: knobs on the system need to make sense to a human
counterpart; an artificial agent should have a way of notifying a human
counterpart of a change in system state, possibly through motion profiles; and
the motion of a human counterpart may have important contextual clues for task
completion. Thus, professional choreographers, dance practitioners, and
movement analysts are critical to research in robotics. They have design
methods for movement that align with human audience perception, can identify
simplified features of movement for human-robot interaction goals, and have
detailed knowledge of the capacity of human movement. This article provides
approaches employed by one research lab, specific impacts on technical and
artistic projects within it, and principles that may guide future such work. The
background section reports on choreography, somatic perspectives,
improvisation, the Laban/Bartenieff Movement System, and robotics. From this
context, methods including embodied exercises, writing prompts, and community
building activities have been developed to facilitate interdisciplinary
research. The results of this work are presented as an overview of
projects in areas like high-level motion planning, software development for
rapid prototyping of movement, artistic output, and user studies that help
understand how people interpret movement. Finally, guiding principles for other
groups to adopt are posited.
Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for
the 21st Century)"
http://www.mdpi.com/journal/arts/special_issues/Machine_Artis
Measuring human perceptions of expressivity in natural and artificial systems through the live performance piece Time to compile
Live performance is a vehicle where theatrical devices are used to exemplify, probe, or question how humans think about objects, each other, and themselves. This paper presents work using this vehicle to explore human perceptions of robot and human capabilities. The paper documents four performances at three distinct venues where user studies were conducted in parallel to live performance. A set of best practices for successful collection of data in this manner over the course of these trials is developed. Then, results of the studies are presented, giving insight into human opinions of a variety of natural and artificial systems. In particular, participants are asked to rate the expressivity of 12 distinct systems, displayed on stage, as well as themselves. The results show trends ranking objects lowest, then robots, then humans, then self, highest. Moreover, objects involved in the show were generally rated higher after the performance. Qualitative responses give further insight into how viewers experienced watching human performers alongside elements of technology. This work lays a framework for measuring human perceptions of robotic systems – and factors that influence this perception – inside live performance and suggests that, through the lens of expressivity, systems of a similar type are rated similarly by audience members.
Time to compile: A performance installation as human-robot interaction study examining self-evaluation and perceived control
Embodied art installations embed interactive elements within theatrical contexts and allow participating audience members to experience art in an active, kinesthetic manner. These experiences can exemplify, probe, or question how humans think about objects, each other, and themselves. This paper presents work using installations to explore human perceptions of robot and human capabilities. The paper documents an installation, developed over several months and activated at distinct venues, where user studies were conducted in parallel to a robotic art installation. A set of best practices for successful collection of data over the course of these trials is developed. Results of the studies are presented, giving insight into human opinions of a variety of natural and artificial systems. In particular, after experiencing the art installation, participants were more likely to attribute action of distinct system elements to non-human entities. Post-treatment survey responses revealed a direct relationship between predicted difficulty and perceived success. Qualitative responses give insight into viewers’ experiences watching human performers alongside technologies. This work lays a framework for measuring human perceptions of humanoid systems – and factors that influence the perception of whether a natural or artificial agent is controlling a given movement behavior – inside robotic art installations.